4 research outputs found

    An automatic wearable multi-sensor based gait analysis system for older adults.

    Gait abnormalities in older adults are very common in clinical practice. They lead to serious adverse consequences such as falls and injury, resulting in increased care costs. There is therefore a national imperative to address this challenge. Currently, gait assessment is done using standardized clinical tools that depend on subjective evaluation. More objective gold-standard methods of analysing gait (motion capture systems such as Qualisys and Vicon) rely on access to expensive, complex equipment based in gait laboratories. These are not widely available for several reasons, including a scarcity of equipment, the need for technical staff, the need for patients to attend in person, complicated and time-consuming procedures, and overall expense. To broaden the use of accurate quantitative gait monitoring and assessment, the major goal of this thesis is to develop an affordable automatic gait analysis system that provides comprehensive gait information and can be used in clinic or at home. It will also be able to quantify and visualize gait parameters, identify gait variables and changes, and monitor abnormal gait patterns of older people in order to reduce the potential for falling and support falls risk management. A research programme based on experiments with volunteers was developed in collaboration with other researchers at Bournemouth University, The Royal Bournemouth Hospital and care homes. This thesis consists of five studies toward addressing this goal. Firstly, a study of the effects on sensor output of attaching an Inertial Measurement Unit (IMU) to different anatomical foot locations. Placing an IMU over the bony prominence of the first cuboid bone proved best, as it delivers the most accurate data. Secondly, an automatic gait feature extraction method for analysing spatiotemporal gait features, which shows that gait features can be extracted automatically outside a gait laboratory. Thirdly, user-friendly and easy-to-interpret visualization approaches are proposed to present real-time spatiotemporal gait information. The four proposed approaches have the potential to help professionals detect and interpret gait asymmetry. Fourthly, a validation study of spatiotemporal IMU-extracted features compared with gold-standard motion capture system and treadmill measurements is conducted in young and older adults. The results obtained from three experimental conditions demonstrate that the IMU-extracted gait features are highly valid for spatiotemporal gait variables in young and older adults. In the last study, an evaluation system using Procrustes and Euclidean distance matrix analysis is proposed to provide a comprehensive interpretation of shape and form differences between individual gaits. The results show that older gaits are distinguishable from young gaits. A pictorial and numerical system is proposed which indicates whether the assessed gait is normal or abnormal depending on its total feature values. This offers several advantages: 1) it is user friendly and easy to set up and implement; 2) it does not require complex equipment with segmentation of body parts; 3) it is relatively inexpensive, increasing affordability and decreasing health inequality; and 4) its versatility increases its usability at home, supporting inclusivity of patients who are housebound. A digital transformation strategy framework is proposed in which stakeholders such as patients, health care professionals and industry partners can collaborate through the development of new technologies, value creation, structural change, affordability and sustainability to improve the diagnosis and treatment of gait abnormalities.
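    The Procrustes analysis named in this abstract compares gait "shapes" after removing position, scale and rotation. A minimal sketch of the distance such an analysis is built on, assuming 2-D landmark sets (this is a generic ordinary-Procrustes implementation for illustration, not the thesis's actual pipeline):

```python
import numpy as np

def procrustes_distance(shape_a, shape_b):
    """Squared Procrustes distance between two (n_points, 2) landmark sets.

    Both shapes are centred at the origin, scaled to unit Frobenius norm,
    and optimally aligned (orthogonal Procrustes) before the residual sum
    of squares is computed; 0 means identical shape up to similarity.
    """
    A = np.asarray(shape_a, dtype=float)
    B = np.asarray(shape_b, dtype=float)
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    A = A / np.linalg.norm(A)
    B = B / np.linalg.norm(B)
    # Optimal orthogonal transform aligning B to A: minimise ||A - B @ Omega||.
    U, _, Vt = np.linalg.svd(B.T @ A)
    Omega = U @ Vt
    return float(np.sum((A - B @ Omega) ** 2))
```

    Two gaits that differ only in subject position, camera scale or orientation then score near zero, while genuine shape differences produce a positive distance.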

    An Enhanced Ensemble Deep Neural Network Approach for Elderly Fall Detection System Based on Wearable Sensors

    Fatal injuries and hospitalizations caused by accidental falls are significant problems among the elderly. Detecting falls in real time is challenging, as many falls occur within a short period. Developing an automated monitoring system that can predict falls before they happen, provide safeguards during the fall, and issue remote notifications after the fall is essential to improving the level of care for the elderly. This study proposed a concept for a wearable monitoring framework that aims to anticipate falls during their beginning and descent, activating a safety mechanism to minimize fall-related injuries and issuing a remote notification after the body impacts the ground. However, the demonstration of this concept involved offline analysis of an ensemble deep neural network architecture based on a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN), using existing data. It is important to note that this study did not involve the implementation of hardware or other elements beyond the developed algorithm. The proposed approach utilized the CNN for robust feature extraction from accelerometer and gyroscope data and the RNN to model the temporal dynamics of the falling process. A distinct class-based ensemble architecture was developed, where each ensemble model identified a specific class. The proposed approach was evaluated on the annotated SisFall dataset and achieved mean accuracies of 95%, 96%, and 98% for Non-Fall, Pre-Fall, and Fall detection events, respectively, outperforming state-of-the-art fall detection methods. The overall evaluation demonstrated the effectiveness of the developed deep learning architecture. This wearable monitoring system will prevent injuries and improve the quality of life of elderly individuals.
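    The class-based ensemble described above can be sketched as one binary scorer per class whose outputs are combined by argmax. The scorers below are toy threshold functions on peak acceleration magnitude (thresholds and channel layout are hypothetical), standing in for the paper's trained CNN+RNN models:

```python
import numpy as np

CLASSES = ["non_fall", "pre_fall", "fall"]  # the three SisFall event classes

def ensemble_predict(window, scorers):
    """Class-based ensemble decision: argmax over per-class scores.

    `window` is a (timesteps, channels) array of accelerometer/gyroscope
    samples; each scorer maps the window to a confidence for its own class.
    """
    scores = np.array([scorers[c](window) for c in CLASSES])
    return CLASSES[int(np.argmax(scores))], scores

def _peak_accel(window):
    # Peak acceleration magnitude, assuming the first three channels
    # are the accelerometer axes (in g).
    return float(np.max(np.linalg.norm(window[:, :3], axis=1)))

# Stand-in scorers with illustrative thresholds -- not trained networks.
scorers = {
    "non_fall": lambda w: 1.0 if _peak_accel(w) < 1.5 else 0.1,
    "pre_fall": lambda w: 1.0 if 1.5 <= _peak_accel(w) < 3.0 else 0.1,
    "fall":     lambda w: 1.0 if _peak_accel(w) >= 3.0 else 0.1,
}
```

    In the paper's setting each scorer would be a full CNN+RNN model; the decision logic combining them stays the same.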

    Deep learning enabled fall detection exploiting gait analysis

    Fall-related injuries often result not only in increased medical, social and care costs but also in loss of mobility, worsened chronic health and even risk of fatality. With the growth of the elderly population, falls are one of the major global public health problems. To address this issue, we present a Deep Learning enabled Fall Detection (DLFD) method exploiting gait analysis. In detail, firstly, we propose a framework for a fall detection system. Secondly, we describe the proposed DLFD method, which extracts gait features from fall and non-fall RGB video using the MediaPipe framework, applies a normalization algorithm, and classifies sequences using a bi-directional Long Short-Term Memory (bi-LSTM) model. Finally, the model is tested on three public datasets of 434×2 videos (more than 1 million frames) comprising different activities and a variety of falls. The experimental results show that the model achieves an accuracy of 96.35%, demonstrating the effectiveness of the proposal. This could play a significant role in alleviating the falls problem by immediately alerting emergency and relevant care teams to take necessary action, speeding up assistance, reducing the risk of prolonged injury and saving lives.
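    The normalization step in a pose-based pipeline like this typically makes keypoints invariant to the subject's position in frame and distance from the camera before they reach the classifier. A minimal sketch, assuming a hypothetical joint layout (indices 0/1 for shoulders, 2/3 for hips — not MediaPipe's actual landmark numbering):

```python
import numpy as np

def normalize_pose_sequence(keypoints):
    """Normalize a (frames, joints, 2) keypoint sequence for classification.

    Per frame: translate so the hip midpoint is the origin, then scale by
    the hip-to-shoulder (torso) distance. The result is invariant to the
    subject's location in the image and to camera distance.
    """
    kp = np.asarray(keypoints, dtype=float)
    shoulders = kp[:, 0:2].mean(axis=1)           # (frames, 2) shoulder midpoints
    hips = kp[:, 2:4].mean(axis=1)                # (frames, 2) hip midpoints
    torso = np.linalg.norm(shoulders - hips, axis=1, keepdims=True)
    torso = np.where(torso > 0, torso, 1.0)       # guard degenerate frames
    centred = kp - hips[:, None, :]
    return centred / torso[:, None, :]
```

    The normalized sequences would then be fed to the bi-LSTM, which models how the joint configuration evolves over time during a fall versus normal activity.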

    YOLO-Fish: A robust fish detection model to detect fish in realistic underwater environment

    Over the last few years, several research works have been performed to monitor fish in the underwater environment, aimed at marine research, understanding ocean geography, and primarily sustainable fisheries. Automating fish identification is very helpful, considering the time and cost of the manual process. However, it can be challenging to differentiate fish from the seabed, and fish types from each other, due to environmental challenges like low illumination, complex backgrounds, high variation in luminosity, free movement of fish, and high diversity of fish species. In this paper, we propose YOLO-Fish, a deep learning based fish detection model. We have proposed two models, YOLO-Fish-1 and YOLO-Fish-2. YOLO-Fish-1 enhances YOLOv3 by fixing the issue of upsampling step sizes to reduce the misdetection of tiny fish. YOLO-Fish-2 further improves the model by adding Spatial Pyramid Pooling to the first model, adding the capability to detect fish in those dynamic environments. To test the models, we introduce two datasets: DeepFish and OzFish. The DeepFish dataset contains around 15k bounding box annotations across 4505 images, where images belong to 20 different fish habitats. OzFish is another dataset comprising about 43k bounding box annotations of wide varieties of fish across around 1800 images. YOLO-Fish-1 and YOLO-Fish-2 achieved average precision of 76.56% and 75.70%, respectively, for fish detection in unconstrained real-world marine environments, which is significantly better than YOLOv3. Both of these models are lightweight compared to recent versions of YOLO like YOLOv4, yet their performance is very similar.
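    Detection models like these are evaluated by matching predicted bounding boxes to ground-truth annotations via intersection-over-union (IoU), the quantity underlying the average-precision figures quoted above. A minimal sketch of that metric for axis-aligned boxes:

```python
def box_iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlap rectangle (may be empty).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

    A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), and average precision is computed from the resulting precision-recall curve.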